Short-term traffic flow prediction based on empirical mode decomposition and long short-term memory neural network
ZHANG Xiaohan, FENG Aimin
Journal of Computer Applications    2021, 41 (1): 225-230.   DOI: 10.11772/j.issn.1001-9081.2020060919
Traffic flow prediction is an important part of intelligent transportation. The traffic data involved are non-linear, periodic, and random, so unstable traffic flow data depend on a long historical range during prediction. At the same time, external factors often introduce noise into the raw data, which can further degrade prediction performance. To address these problems, a prediction algorithm named EMD-LSTM, able to both denoise the data and handle long-term dependence, was proposed. Firstly, Empirical Mode Decomposition (EMD) was used to gradually decompose the components of different scales in the traffic time series into a series of Intrinsic Mode Functions (IMFs) with the same characteristic scale, thereby removing part of the noise. Then, a Long Short-Term Memory (LSTM) neural network was used to solve the long-term dependence problem of the data, so that the algorithm performs better in long-horizon prediction. Experimental results of short-term prediction on real datasets show that the Mean Absolute Error (MAE) of EMD-LSTM is 1.916 32 lower than that of LSTM, and its Mean Absolute Percentage Error (MAPE) is 4.645 45 percentage points lower. The proposed hybrid model thus significantly improves prediction accuracy and can effectively solve the traffic flow prediction problem.
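A minimal sketch of the decompose-then-predict pipeline the abstract describes, assuming the PyEMD package and TensorFlow/Keras; the window length, layer sizes, training settings, and the choice to sum per-IMF forecasts are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of an EMD-LSTM style forecaster: decompose the series with EMD,
# fit one small LSTM per intrinsic mode function, sum the component forecasts.
import numpy as np
from PyEMD import EMD          # pip install EMD-signal (assumed dependency)
import tensorflow as tf

def make_windows(series, lookback=12):
    """Slice a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

def emd_lstm_forecast(flow, lookback=12, epochs=20):
    # 1) Decompose the raw traffic flow into IMFs (rows of shape len(flow)).
    imfs = EMD()(np.asarray(flow, dtype=float))
    preds = np.zeros(len(flow) - lookback)
    # 2) Fit one LSTM per IMF; in-sample prediction here for brevity only.
    for imf in imfs:
        X, y = make_windows(imf, lookback)
        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(32, input_shape=(lookback, 1)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mae")
        model.fit(X, y, epochs=epochs, verbose=0)
        preds += model.predict(X, verbose=0).ravel()
    return preds
```

In practice the data would be split into training and test windows before fitting, and MAE/MAPE would be computed on the held-out portion, as in the reported experiments.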
Customer purchasing power prediction of Google store based on deep LightGBM ensemble learning model
YE Zhiyu, FENG Aimin, GAO Hang
Journal of Computer Applications    2019, 39 (12): 3434-3439.   DOI: 10.11772/j.issn.1001-9081.2019071305
Ensemble learning models such as Light Gradient Boosting Machine (LightGBM) mine data information only once; they cannot automatically refine the granularity of data mining or obtain more of the potential internal correlation information in the data through deeper mining. To solve these problems, a deep LightGBM ensemble learning model composed of a sliding window and a deepening step was proposed. Firstly, the sliding window enabled the ensemble learning model to automatically refine the granularity of data mining, so as to further mine the potential internal correlation information in the data and give the model a certain representation learning ability. Secondly, on the basis of the sliding window, the deepening step was used to further improve the representation learning ability of the model. Finally, feature engineering was applied to the dataset. Experimental results on the Google store dataset show that the prediction accuracy of the proposed deep ensemble learning model is 6.16 percentage points higher than that of the original ensemble learning model. The proposed method can automatically refine the granularity of data mining and thus obtain more potential information from the dataset. Moreover, compared with traditional deep neural networks, the deep LightGBM ensemble learning model, as a non-neural-network model, has fewer parameters and better interpretability.
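A rough sketch of the sliding-window plus deepening idea with LightGBM, in the spirit of the abstract (and of cascade, deep-forest-style models); the window size, stride, number of layers, estimator counts, and the regression setting are assumptions for illustration rather than the authors' configuration.

```python
# Sketch of a "deep LightGBM" ensemble: a sliding window over the feature
# vector generates per-window predictions as new features (finer granularity),
# then a cascade of layers repeatedly appends its outputs to the inputs.
import numpy as np
import lightgbm as lgb

def sliding_window_features(X, y, window=8, stride=4):
    """Scan each sample's feature vector with a window; one LightGBM model per
    window position produces a prediction that becomes a new feature."""
    new_feats = []
    for s in range(0, X.shape[1] - window + 1, stride):
        sub = X[:, s:s + window]
        model = lgb.LGBMRegressor(n_estimators=100)
        model.fit(sub, y)
        new_feats.append(model.predict(sub))
    return np.column_stack(new_feats)

def deep_lightgbm(X, y, n_layers=3):
    """Deepening: each layer appends its predictions to the input of the next,
    giving the ensemble a representation-learning flavor."""
    feats = np.hstack([X, sliding_window_features(X, y)])
    models = []
    for _ in range(n_layers):
        model = lgb.LGBMRegressor(n_estimators=200)
        model.fit(feats, y)
        models.append(model)
        feats = np.column_stack([feats, model.predict(feats)])
    return models
```

A production version would generate the layer-wise features from out-of-fold predictions rather than in-sample fits, to avoid the leakage that direct training-set predictions introduce.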